This IPython notebook explains a basic workflow for matching two tables using py_entitymatching. Our goal is to come up with a workflow to match restaurants from the Fodors and Zagat sites. Specifically, we want to maximize F1. The datasets contain information about the restaurants.
First, we need to import the py_entitymatching package and other libraries as follows:
In [1]:
import sys
# Add the local py_entitymatching source to the Python path
# (needed only if the package is not installed via pip)
sys.path.append('/Users/pradap/Documents/Research/Python-Package/anhaid/py_entitymatching/')
import py_entitymatching as em
import pandas as pd
import os
In [2]:
# Display the versions
print('python version: ' + sys.version )
print('pandas version: ' + pd.__version__ )
print('py_entitymatching version: ' + em.__version__ )
Matching two tables typically consists of the following three steps:
1. Reading the input tables
2. Blocking the input tables to get a candidate set
3. Matching the tuple pairs in the candidate set
In [3]:
# Get the paths
path_A = em.get_install_path() + os.sep + 'datasets' + os.sep + 'end-to-end' + os.sep + 'restaurants' + os.sep + 'fodors.csv'
path_B = em.get_install_path() + os.sep + 'datasets' + os.sep + 'end-to-end' + os.sep + 'restaurants' + os.sep + 'zagats.csv'
In [4]:
# Load csv files as dataframes and set the key attribute in the dataframe
A = em.read_csv_metadata(path_A, key='id')
B = em.read_csv_metadata(path_B, key='id')
In [5]:
print('Number of tuples in A: ' + str(len(A)))
print('Number of tuples in B: ' + str(len(B)))
print('Number of tuples in A X B (i.e., the Cartesian product): ' + str(len(A)*len(B)))
In [6]:
A.head(2)
Out[6]:
In [7]:
B.head(2)
Out[7]:
In [8]:
# Display the keys of the input tables
em.get_key(A), em.get_key(B)
Out[8]:
In [9]:
# If the tables are large, we can down sample them like this
A1, B1 = em.down_sample(A, B, 200, 1, show_progress=False)
len(A1), len(B1)
# But for the purposes of this notebook, we will use the entire tables A and B
Out[9]:
Before we do the matching, we would like to remove the obviously non-matching tuple pairs from the input tables. This reduces the number of tuple pairs considered for matching. py_entitymatching provides four different blockers: (1) attribute equivalence, (2) overlap, (3) rule-based, and (4) black-box. The user can mix and match these blockers to form a blocking sequence applied to the input tables; a rule-based blocker, for instance, could be added as in the sketch below.
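As an aside (not used in this workflow), here is a minimal sketch of a rule-based blocker that drops pairs whose restaurant names are very dissimilar. The feature name name_name_lev_sim is an assumption about what py_entitymatching auto-generates for this schema; the actual names depend on the input attributes.

# Generate features for blocking (illustrative aside only)
block_f = em.get_features_for_blocking(A, B)
rb = em.RuleBasedBlocker()
# Drop a pair if the Levenshtein similarity of the names is below 0.4
rb.add_rule(['name_name_lev_sim(ltuple, rtuple) < 0.4'], block_f)
C_rb = rb.block_tables(A, B,
                       l_output_attrs=['name', 'addr', 'city', 'phone'],
                       r_output_attrs=['name', 'addr', 'city', 'phone'],
                       show_progress=False)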
For the matching problem at hand, we know that two restaurants in different cities will not match. So we decide to apply blocking over the city attribute:
In [10]:
# Blocking plan
# A, B -- attribute equiv. blocker [city] --------------------|---> candidate set
In [11]:
# Create attribute equivalence blocker
ab = em.AttrEquivalenceBlocker()
# Block using city attribute
C1 = ab.block_tables(A, B, 'city', 'city',
                     l_output_attrs=['name', 'addr', 'city', 'phone'],
                     r_output_attrs=['name', 'addr', 'city', 'phone'])
In [12]:
len(C1)
Out[12]:
The number of tuple pairs considered for matching is reduced to 10165 (from 176423), but we want to make sure that the blocker did not drop any potential matches. We can debug the blocker output in py_entitymatching as follows:
In [13]:
# Debug blocker output
dbg = em.debug_blocker(C1, A, B, output_size=200)
In [14]:
# Display first few tuple pairs from the debug_blocker's output
dbg.head()
Out[14]:
From the debug blocker's output we observe that the current blocker drops quite a few potential matches. We need to update the blocking sequence to avoid dropping them.
For the considered dataset, we know that for two restaurants to match, their names must overlap (i.e., share at least one word). We can use the overlap blocker for this purpose. Finally, we union the outputs of the attribute equivalence blocker and the overlap blocker to get a consolidated candidate set.
In [15]:
# Updated blocking sequence
# A, B --- attribute equivalence [city] ---> C1 --|
#                                                 |---> C (union)
# A, B --- overlap blocker [name] ---------> C2 --|
In [16]:
# Create overlap blocker
ob = em.OverlapBlocker()
# Block tables using 'name' attribute
C2 = ob.block_tables(A, B, 'name', 'name',
                     l_output_attrs=['name', 'addr', 'city', 'phone'],
                     r_output_attrs=['name', 'addr', 'city', 'phone'],
                     overlap_size=1,
                     show_progress=False)
len(C2)
Out[16]:
In [17]:
# Display first two rows from C2
C2.head(2)
Out[17]:
In [18]:
# Combine blocker outputs
C = em.combine_blocker_outputs_via_union([C1, C2])
In [19]:
len(C)
Out[19]:
We observe that the number of tuple pairs considered for matching has increased to 12530 (from 10165). Now let us debug the blocker output again to check whether the current blocking sequence drops any potential matches.
In [20]:
# Debug again
dbg = em.debug_blocker(C, A, B)
In [21]:
# Display first few rows from the debugger output
dbg.head(3)
Out[21]:
We observe that the current blocking sequence does not drop any obvious potential matches, so we can proceed to the matching step. A subtle point to note here: debugging the blocker output provides a practical stopping criterion for modifying the blocking sequence.
In this step, we want to match the tuple pairs in the candidate set. Specifically, we use a learning-based method, which typically involves the following steps: (1) sampling and labeling the candidate set, (2) splitting the labeled data into development and evaluation sets, (3) selecting the best learning-based matcher using the development set, and (4) evaluating the selected matcher on the evaluation set.
First, we randomly sample 450 tuple pairs for labeling purposes.
In [22]:
# Sample candidate set
S = em.sample_table(C, 450)
Next, we label the sampled candidate set. Specifically, we enter 1 for a match and 0 for a non-match.
In [23]:
# Label S
G = em.label_table(S, 'gold')
For the purposes of this guide, we will load in a pre-labeled dataset (of 450 tuple pairs) included in this package.
In [24]:
# Load the pre-labeled data
path_G = em.get_install_path() + os.sep + 'datasets' + os.sep + 'end-to-end' + os.sep + 'restaurants' + os.sep + 'lbl_restnt_wf1.csv'
G = em.read_csv_metadata(path_G,
                         key='_id',
                         ltable=A, rtable=B,
                         fk_ltable='ltable_id', fk_rtable='rtable_id')
len(G)
Out[24]:
In this step, we split the labeled data into two sets: development (I) and evaluation (J). Specifically, the development set is used to come up with the best learning-based matcher, and the evaluation set is used to evaluate the selected matcher on unseen data.
In [25]:
# Split the labeled data G into development set (I) and evaluation set (J)
IJ = em.split_train_test(G, train_proportion=0.7, random_state=0)
I = IJ['train']
J = IJ['test']
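We can quickly verify the split sizes: a 70/30 split of the 450 labeled pairs gives 315 development pairs and 135 evaluation pairs.

# Check the sizes of the development and evaluation sets
len(I), len(J)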
Selecting the best learning-based matcher typically involves the following steps: (1) creating a set of learning-based matchers, (2) creating features, (3) converting the development set into feature vectors, and (4) selecting the best matcher using k-fold cross validation.
In [26]:
# Create a set of ML-matchers
dt = em.DTMatcher(name='DecisionTree', random_state=0)
svm = em.SVMMatcher(name='SVM', random_state=0)
rf = em.RFMatcher(name='RF', random_state=0)
lg = em.LogRegMatcher(name='LogReg', random_state=0)
ln = em.LinRegMatcher(name='LinReg')
nb = em.NBMatcher(name='NaiveBayes')
Next, we need to create a set of features for the development set. py_entitymatching provides a way to automatically generate features based on the attributes in the input tables. For the purposes of this guide, we use the automatically generated features.
In [27]:
# Generate features
feature_table = em.get_features_for_matching(A, B)
In [28]:
# List the names of the features generated
feature_table['feature_name']
Out[28]:
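If the automatically generated features are not sufficient, py_entitymatching also lets us declare a feature from a similarity-function expression. A minimal sketch, shown as an aside and not used in this workflow; the expression and the feature name are illustrative assumptions:

# Declare a Jaccard similarity feature over 3-grams of the 'name' attribute
sim = em.get_sim_funs_for_matching()
tok = em.get_tokenizers_for_matching()
f = em.get_feature_fn('jaccard(qgm_3(ltuple["name"]), qgm_3(rtuple["name"]))', tok, sim)
em.add_feature(feature_table, 'name_name_jac_qgm3_qgm3', f)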
In [29]:
# Convert I into a set of feature vectors using the feature table
H = em.extract_feature_vecs(I,
                            feature_table=feature_table,
                            attrs_after='gold',
                            show_progress=False)
In [30]:
# Display first few rows
H.head(3)
Out[30]:
Now, we select the best matcher using k-fold cross-validation. For the purposes of this guide, we use five-fold cross validation and the F1 metric to select the best matcher.
In [31]:
# Select the best ML matcher using CV
result = em.select_matcher([dt, rf, svm, ln, lg, nb], table=H,
                           exclude_attrs=['_id', 'ltable_id', 'rtable_id', 'gold'],
                           k=5,
                           target_attr='gold', metric='f1', random_state=0)
result['cv_stats']
Out[31]:
We observe that even the best matcher is not achieving as high an F1 as we would like. We debug the matcher to see what might be wrong. To do this, we first split the feature vectors into train and test sets.
In [32]:
# Split feature vectors into train and test
UV = em.split_train_test(H, train_proportion=0.5)
U = UV['train']
V = UV['test']
Next, we debug the matcher using the GUI. For the purposes of this guide, we debug the random forest matcher.
In [33]:
# Debug the random forest matcher using the GUI
em.vis_debug_rf(rf, U, V,
                exclude_attrs=['_id', 'ltable_id', 'rtable_id', 'gold'],
                target_attr='gold')
From the GUI, we observe that phone numbers seem to be an important attribute, but they are formatted differently in the two tables. The current features do not capture this, and adding a feature that normalizes the formats before comparing the phone numbers can potentially improve the F1 numbers.
In [34]:
def phone_phone_feature(ltuple, rtuple):
    # Normalize the phone formats before comparing:
    # replace '/' separators with '-' and strip spaces
    p1 = ltuple.phone.replace('/', '-').replace(' ', '')
    p2 = rtuple.phone.replace('/', '-').replace(' ', '')
    # Return 1.0 if the normalized phone numbers match exactly, else 0.0
    return 1.0 if p1 == p2 else 0.0
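We can sanity-check the new feature on a sample pair before registering it (an illustrative check; the first tuples of A and B need not actually match):

# Returns 1.0 only if the normalized phone numbers are identical
phone_phone_feature(A.iloc[0], B.iloc[0])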
In [35]:
feature_table = em.get_features_for_matching(A, B)
em.add_blackbox_feature(feature_table, 'phone_phone_feature', phone_phone_feature)
Out[35]:
Now, we repeat extracting the feature vectors (this time with the updated feature table), imputing the table if needed (see the sketch below), and selecting the best matcher again using cross-validation.
In [36]:
H = em.extract_feature_vecs(I, feature_table=feature_table, attrs_after='gold', show_progress=False)
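The feature vectors for this dataset are complete, so no imputation is actually required; if H contained missing values (e.g., NaNs from missing attribute values), a minimal sketch using em.impute_table with the mean strategy (an assumption suitable for numeric features) would be:

# Impute missing values in the feature vectors, excluding id and label columns
H = em.impute_table(H,
                    exclude_attrs=['_id', 'ltable_id', 'rtable_id', 'gold'],
                    strategy='mean')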
In [37]:
# Select the best ML matcher using CV
result = em.select_matcher([dt, rf, svm, ln, lg, nb], table=H,
exclude_attrs=['_id', 'ltable_id', 'rtable_id', 'gold'],
k=5,
target_attr='gold', metric='f1', random_state=0)
result['cv_stats']
Out[37]:
We now observe that the best matcher achieves a better F1. Let us stop here and proceed to evaluating the best matcher on the unseen data (the evaluation set).
Evaluating the matching output on the evaluation set typically involves the following four steps: (1) converting the evaluation set into feature vectors, (2) training a matcher using the feature vectors extracted from the development set, (3) predicting matches on the evaluation set using the trained matcher, and (4) evaluating the predicted matches.
First, we convert the evaluation set J into feature vectors (using the feature table).
In [38]:
# Convert J into a set of feature vectors using feature table
L = em.extract_feature_vecs(J, feature_table=feature_table,
                            attrs_after='gold', show_progress=False)
Now, we train the matcher using all of the feature vectors from the development set. For the purposes of this guide, we use random forest as the selected matcher.
In [39]:
# Train using feature vectors from I
rf.fit(table=H,
       exclude_attrs=['_id', 'ltable_id', 'rtable_id', 'gold'],
       target_attr='gold')
Next, we predict the matches for the evaluation set (using the feature vectors extracted from it).
In [40]:
# Predict on L
predictions = rf.predict(table=L,
                         exclude_attrs=['_id', 'ltable_id', 'rtable_id', 'gold'],
                         append=True, target_attr='predicted', inplace=False)
Finally, we evaluate the accuracy of the predicted outputs.
In [41]:
# Evaluate the predictions
eval_result = em.eval_matches(predictions, 'gold', 'predicted')
em.print_eval_summary(eval_result)
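Beyond the summary statistics, we can inspect the predicted matches directly via the foreign-key columns carried in the predictions table:

# Display the id pairs that the matcher predicted as matches
matches = predictions[predictions['predicted'] == 1]
matches[['ltable_id', 'rtable_id']].head()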